Results 1 - 3 of 3
1.
24th International Conference on Human-Computer Interaction, HCII 2022 ; 13518 LNCS:441-460, 2022.
Article in English | Scopus | ID: covidwho-2173820

ABSTRACT

This paper presents a user-centered approach to translating techniques and insights from AI explainability research into effective explanations of complex issues in other fields, using COVID-19 as an example. We show how the problem of AI explainability and the explainability problem in the COVID-19 pandemic are related: both are instances of a more general explainability problem that arises when people face opaque, complex systems and processes whose functioning is not readily observable or understandable to them ("black boxes"). Accordingly, we discuss how we applied an interdisciplinary, user-centered approach based on Design Thinking to develop a prototype of a user-centered explanation for a complex issue: people's perception of COVID-19 vaccine development. The prototype demonstrates how AI explainability techniques can be adapted and integrated with methods from communication science, visualization, and HCI for application in this context. We also discuss results from a first evaluation in a user study with 88 participants and outline future work. The results indicate that methods and insights from explainable AI can be applied effectively to explainability problems in other fields and support the suitability of our conceptual framework for informing such transfers. In addition, we show how lessons learned in the process provide new insights for further work on user-centered approaches to explainable AI itself. © 2022, The Author(s).

2.
Sensors (Basel) ; 22(24)2022 Dec 18.
Article in English | MEDLINE | ID: covidwho-2163571

ABSTRACT

The novel coronavirus (COVID-19), which emerged as a pandemic, has claimed many lives and affected millions of people across the world since December 2019. Although the disease is now largely under control, it still affects people in many countries. Traditional diagnosis is time-consuming, inefficient, and has a low detection rate for this disease. There is therefore a need for an automatic system that expedites the diagnostic process while retaining performance and accuracy. Artificial intelligence (AI) technologies such as machine learning (ML) and deep learning (DL) offer powerful solutions to this problem. In this study, a state-of-the-art CNN model, the densely connected squeeze convolutional neural network (DCSCNN), was developed to classify X-ray images of COVID-19, pneumonia, lung opacity, and normal patients. Data were collected from several sources, and different preprocessing techniques were applied to enhance image quality so that the model could learn accurately and perform optimally. The attention regions and decisions of the AI model were visualized using the Grad-CAM and LIME methods. The DCSCNN combines the strengths of the Dense and Squeeze networks. In our experiment, seven classification tasks were performed: six binary (COVID vs. normal, COVID vs. lung opacity, lung opacity vs. normal, COVID vs. pneumonia, pneumonia vs. lung opacity, pneumonia vs. normal) and one multiclass (COVID vs. pneumonia vs. lung opacity vs. normal). The main contributions of this paper are as follows. First, the development of the DCSCNN model, which performs both binary and multiclass classification with excellent accuracy. Second, to ensure trust, transparency, and explainability, two popular explainable AI (XAI) techniques, Grad-CAM and LIME, were applied; these address the black-box nature of the model. The proposed DCSCNN achieved an accuracy of 98.8% for COVID-19 vs. normal, followed by COVID-19 vs. lung opacity (98.2%), lung opacity vs. normal (97.2%), COVID-19 vs. pneumonia (96.4%), pneumonia vs. lung opacity (95.8%), pneumonia vs. normal (97.4%), and 94.7% for multiclass classification of all four classes (COVID vs. pneumonia vs. lung opacity vs. normal). This classification performance can help doctors diagnose these diseases quickly and efficiently.
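The abstract names Grad-CAM as one of the two XAI techniques used to visualize the model's attention regions. Below is a minimal sketch of the Grad-CAM mechanics only; the DCSCNN implementation is not included in this record, so a torchvision ResNet-50 with a hypothetical four-class head stands in for the backbone, and the hooked layer, class head, and dummy input are assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# ResNet-50 stands in for the unavailable DCSCNN; the 4-class head mirrors the
# paper's COVID / pneumonia / lung opacity / normal setup (assumed, untrained).
model = models.resnet50(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 4)
model.eval()

store = {}

def save_activation(module, inputs, output):
    store["act"] = output.detach()           # (1, C, h, w) feature maps

def save_gradient(module, grad_input, grad_output):
    store["grad"] = grad_output[0].detach()  # gradients w.r.t. those maps

# Hook the last convolutional stage, whose output feeds the classifier head.
model.layer4.register_forward_hook(save_activation)
model.layer4.register_full_backward_hook(save_gradient)

def grad_cam(x, class_idx=None):
    """Return an (H, W) heatmap in [0, 1] for one image tensor of shape (1, 3, H, W)."""
    logits = model(x)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()  # explain the predicted class
    model.zero_grad()
    logits[0, class_idx].backward()
    weights = store["grad"].mean(dim=(2, 3), keepdim=True)           # GAP the gradients
    cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))  # weighted sum + ReLU
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
    return ((cam - cam.min()) / (cam.max() - cam.min() + 1e-8))[0, 0]

# Dummy input just to exercise the code path; a real call would use a
# normalized chest X-ray resized to 224x224 and a trained model.
heatmap = grad_cam(torch.randn(1, 3, 224, 224))
```

In practice, the resulting heatmap would be overlaid on the input X-ray to show which lung regions drove the prediction, which is how Grad-CAM addresses the black-box concern the abstract raises.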


Subject(s)
COVID-19 , Humans , COVID-19/diagnostic imaging , Artificial Intelligence , X-Rays , Neural Networks, Computer
3.
IEEE Transactions on Computational Social Systems ; : 14, 2022.
Article in English | Web of Science | ID: covidwho-1895932

ABSTRACT

Fake news is a major threat to democracy (e.g., by influencing public opinion), and its impact cannot be overstated, particularly in our socially and digitally connected society. Researchers from different disciplines (e.g., computer science, political science, information science, and linguistics) have studied the dissemination, detection, and mitigation of fake news; however, detecting and preventing its dissemination remains challenging in practice. In addition, we emphasize the importance of designing artificial intelligence (AI)-powered systems capable of providing detailed yet user-friendly explanations of fake news classification and detection. Hence, in this article, we systematically survey existing state-of-the-art approaches designed to detect and mitigate the dissemination of fake news and, based on this analysis, discuss several key challenges and present a potential future research agenda, in particular the incorporation of explainable AI into fake news credibility systems.
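The survey's call for detailed yet user-friendly explanations of fake news classification can be illustrated with LIME, a widely used model-agnostic explainer. The sketch below is illustrative only: the TF-IDF plus logistic regression pipeline and the toy headlines are placeholder assumptions, not a system or dataset from the article.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Tiny placeholder corpus; a real system would train on a labeled fake-news dataset.
texts = [
    "Scientists confirm vaccine passed phase 3 trials",
    "Miracle cure hidden by the government, share before deleted",
    "Central bank announces interest rate decision",
    "Secret document proves election was decided in advance",
]
labels = [0, 1, 0, 1]  # 0 = credible, 1 = fake

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["credible", "fake"])
explanation = explainer.explain_instance(
    "Miracle cure for the virus hidden by officials",
    pipeline.predict_proba,  # classifier function: list of texts -> class probabilities
    num_features=5,          # report the 5 most influential words
)
# Each (word, weight) pair says how much a word pushed the prediction toward "fake" --
# the kind of per-article rationale a user-facing credibility system could surface.
print(explanation.as_list())
```

LIME perturbs the input text by removing words and fits a local surrogate model, so the weights it reports are faithful only to the classifier's behavior near this one article, which is exactly the granularity a per-item credibility explanation needs.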
